DCLA Meet CIDA: Collective Intelligence Deliberation Analytics

Authors

  • Simon Buckingham Shum
  • Anna De Liddo
  • Mark Klein
Abstract

This discussion paper builds a bridge between Discourse-Centric Learning Analytics (DCLA), whose focus tends to be on student discourse in formal educational contexts, and research and practice in Collective Intelligence Deliberation Analytics (CIDA), which seeks to scaffold quality deliberation in teams and collectives devising solutions to complex problems. CIDA research aims to equip networked communities with deliberation platforms capable of hosting large-scale, reflective conversations, and of actively feeding back to participants and moderators the 'vital signs' of the community and the state of its deliberations. CIDA tends not to focus on formal educational communities, although many such communities would consider themselves learning communities in the broader sense, since they recognize the need to pool collective intelligence in order to understand, and co-evolve solutions to, complex dilemmas. We propose that the context and rationale behind CIDA efforts, and emerging CIDA implementations, contribute a research and technology stream to the DCLA community. The argument is twofold: (i) the context of CIDA work connects with the growing recognition in educational thinking that students from school age upwards should be given opportunities to engage in authentic learning challenges, wrestling with problems and engaging in practices increasingly close to the complexity they will confront when they graduate; (ii) in the contexts of both DCLA and CIDA, different kinds of users need feedback on the state of the debate and the quality of the conversation: the students and educators served by DCLA are mirrored by the citizens and facilitators served by CIDA. In principle, therefore, a fruitful dialogue could unfold between DCLA and CIDA researchers and practitioners, in order to better understand common and distinctive requirements.

Contents

1. Introduction
2. The need for reflective deliberation tools
   2.1 The limits of forums and commenting
   2.2 The limits of ideation tools
   2.3 Summary
3. Pain points in online deliberation for social innovation
   3.1 It's hard to visualise a debate or an argument
   3.2 Poor summarisation
   3.3 Poor commitment to action
   3.4 Sustaining participation
   3.5 Shallow contributions and unsystematic coverage
   3.6 Poor idea evaluation
4. Collective intelligence deliberation platforms
   4.1 Essence of the approach: semantic hypertext
   4.2 Examples of collective intelligence deliberation platforms
5. Deliberation analytics to quantify the health of a debate
6. Process-Goal-Exception analysis for deliberation analytics
   6.1 Identify normative process model
   6.2 Identify goals
   6.3 Identify exceptions
   6.4 Identify handlers
7. Defining CIDA: top-down and bottom-up
8. Concluding thoughts
9. Acknowledgements
10. References

1. Introduction

This discussion paper introduces work in an emerging field, which we will refer to as Collective Intelligence Deliberation Analytics (CIDA), as a contribution to the conversation at the Discourse-Centric Learning Analytics (DCLA) workshop on relevant literature and technology in the DCLA research and design space.

Our argument proceeds as follows. All over the world, collectives are forming to tackle the challenge of devising practical, collectively owned solutions to some of society's most pressing problems. Such conversations are taking place daily at many scales: from small teams, organisations and networks of a few dozen, to several hundred, to even thousands of participants in open consultations and networks for advocacy or social innovation. This is becoming a recognizable research field. For instance, the European Commission Horizon 2020 R&D programme now has a theme dedicated to the role of technology in building Collective Awareness Platforms for Sustainability and Social Innovation, which is currently supporting the work reported here (EC-Horizon2020), and a series of annual international workshops has been held in the field of organizational collaborative computing, devoted to ideation and deliberation tools for Collective Intelligence (CI) (CICDA, 2012).

A contribution that the web makes to such collective intelligence is new kinds of processes and tools to facilitate deliberation: the generation of ideas, and the evidence-based critical evaluation of potential solutions to issues. In this context, the distinctive strand of our work focuses on addressing the acute limitations of current social platforms that merely invite the submission of comments and 'ideas', on which people then vote with simple 'thumbs up' or 'like' clicks.
CI deliberation platforms require participants to build more reflectively on each other's contributions. Efforts are now under way to develop analytics services that provide ways to make sense of how much progress is being made. This, then, is the context of CIDA research.

Meanwhile, educational institutions are being challenged on many fronts. A fast-moving technological landscape is opening up new possibilities, as documented by numerous horizon-scanning learning technology reports. In higher education the quality of the student experience is under intense scrutiny, as educational systems reflect at national and international levels on their fitness for purpose and value for money. Coupled with calls from business for graduates who are not just academically excellent, but have transferable skills and competencies equipping them for the complexities of the workplace, this is serving as a driver for action research into new models focused on the holistic design of learning, catalysing academics (Deakin Crick, 2009; Gardner, 1983; Perkins, 1993; Claxton, 2001), national networks and funding programmes. A particular focus of these initiatives is on 'deeper learning' that is more authentic in nature (Whole Education, 2014; Hewlett Foundation, 2014; Fullan and Langworthy, 2013), to the extent that the educator may not know the 'right answer' but is learning with the students. Indeed, there may be no knowable right answer, such is the open-ended nature of truly "wicked problems" whose very definition sometimes defies consensus (Rittel, 1972). While accuracy of conceptual understanding remains as important as ever, in the absence of a knowable correct solution it becomes increasingly important to evidence mastery of the appropriate processes through which one may tackle such open-ended problems. To the extent that knowledge is constructed through the use of language to share and contest ideas, discourse becomes a window into the learner's mind, hence the importance of discourse analytics.

This paper therefore explores the DCLA-CIDA interplay: between deliberation platforms and associated analytics in formal educational contexts (DCLA), and in informal, applied contexts (CIDA). We explore the proposition that the educational need to scaffold authentic learning on problems that matter, and moreover the need to monitor the quality of that process, converges very naturally with developments in CI deliberation platforms. Both contexts require that analytics make more visible what is going on in the platform, for the benefit of participants (whether students or citizens) and those charged with facilitating them (whether teachers or e-participation moderators). At various points we highlight reflections on the differences between DCLA and CIDA use contexts, and on what the DCLA and CIDA communities may learn from each other.

2. The need for reflective deliberation tools

2.1 The limits of forums and commenting

The online discussion spaces we see on the Web today typically provide for the addition of flat listings of comments, listed by date (e.g. comments in Facebook, on web articles, or on blog posts), or threaded in a strict tree which can additionally be viewed by 'subject' line (e.g. Google or Yahoo Groups; ForumSoftware.org). These are fundamentally chronological views of the unfolding conversation, drawing attention to the most recent utterances, but they offer no insight into the logical structure of the ideas, that is, the coherence of the argument.
At a glance, all one can tell is which threads have the most contributions, or have been most active.

Learning context: Online learning platforms are similarly dominated by threaded discussion forums and, in recent years, social-web-style platforms with even simpler flat commenting (e.g. via status updates, blogs or wikis). These do not have affordances that promote reflective contributions, or that assist analysis of the state of the debate.

2.2 The limits of ideation tools

Another class of CI platform is the Idea Management System (IMS) for creative, collective ideation (e.g. http://www.spigit.com, http://ideascale.com). These are designed to support grassroots innovation systems, enabling the employees of an organization to deliberate about innovation, or its customers to suggest new products (e.g. Bailey and Horwitz, 2010). Klein (2012) has critiqued such tools, noting that when an entire community participates in the idea management and deliberation process, a much larger number of ideas is generated, and selection and judgment can become prohibitively lengthy and time-consuming. Bjelland and Wood (2008) studied the process of idea generation at IBM and stressed the critical role played by managers and the large amount of work that they did: the role of facilitators, together with software for identifying and nurturing good ideas as these were generated by the organization, was essential. Klein also reports that the work of facilitators can be prohibitively lengthy and time-consuming. In one Google project (www.project10tothe100.com), people from around the world submitted about 154,000 ideas. Google had to recruit 3,000 employees to filter and consolidate the large number of ideas received, in a process that put the company nine months behind its original schedule. The work of Convertino (Baez & Convertino, 2012) exemplifies the next generation of IMS, which seeks to tackle these bottlenecks.

Learning context: The number of 'likes' that a contribution to a social/ideation platform receives can be motivating for students, giving them a sense of identity in a group. However, a deeper learning orientation must move beyond the popularity of ideas and challenge learners about the rationale behind a vote: what is the quality of reasoning that leads a student to vote an idea up or down? Can they make that thinking visible in their discourse, and engage with contrasting views, in an appropriate manner?

2.3 Summary

The very low 'entry threshold' that all of these tools set for contributing (in order to maximise the number of users signing up, whether for commercial purposes or to maximize participation on democratic grounds) is in tension with the need to promote, and understand, a high-quality conversation, which is critical to more advanced forms of collective endeavour. Our interest is in the more complex forms of collective action: socio-political dilemmas where people may disagree on the nature of the problem and on what might count as a solution, where there is insufficient data, ambiguity about its trustworthiness, and uncertainty about the impact of actions. The answers to complex questions of this sort are rarely simple, but we propose that their complexity can be managed using techniques that show more clearly the nature of agreement and disagreement, poorly developed lines of reasoning, and where a contribution might make most impact rather than duplicate prior work.
Resolving these requires high-quality deliberation, which could benefit from more powerful scaffolding.

3. Pain points in online deliberation for social innovation

Collective intelligence can be viewed as comprising a spectrum of capabilities that ranges from collective sensing at one end (where a collective pools data on its environment), through sensemaking (interpreting data to identify patterns that warrant action), ideation (developing ideas about which actions to pursue), decision-making (selecting the best actions), and action (implementing these actions in a coordinated, effective way), as shown in Figure 1.

Figure 1: A spectrum of CI activities

At the left side of the spectrum, sensor infrastructures are becoming ubiquitous, while towards the right end, relatively mature online voting and workflow coordination technologies support option prioritization and plan execution. We are currently much less able to collectively make sense of the vast amounts of data now available to us, and to come up with innovative and effective arguments for solutions, especially in domains where there are many competing perspectives on how to understand and solve a problem (Tversky and Kahneman, 1974; Sunstein, 2006; Schulz-Hardt et al., 2000; Cook and Smallman, 2007; Klein, 2012).

As part of the European Union Catalyst project, we have recently completed a requirements consultation with five organisations who specialize in hosting and moderating medium to large scale online conversations for social innovation collectives. Five workshops, with on average 10 participants each, followed a methodology in which the participants were asked to prioritise the most pressing "pain points" that they experience in facilitating such community deliberations. We summarise next the highest-priority issues, and the reader is invited to consider how closely these parallel the challenges of scaffolding quality student discourse in more formal learning contexts.

3.1 It's hard to visualise a debate or an argument

The consultees confirmed that the effective visualisation of concepts, new ideas and deliberations is essential for shared understanding, but suffers both from a lack of efficient tools to create such visualisations and from a lack of ways to reuse them across platforms and debates. Yet most users of these platforms (multilingual, multicultural communities, Generation Y, etc.) wish to have access to easy-to-understand, image- and video-based content that they can grasp very rapidly and share easily with their peers via social media channels. Many consultees reported that poor summarisation and visualisation are "the biggest problems as these both result in platforms which are unappealing to the user, and therefore suffer from a lack of participation". In particular, visualisation of the debate was considered important both for the community who participate in the debate and for the community manager. Some claimed that: "As a user, visualisation is my biggest problem. It is often difficult to get into the discussion at the beginning. As a manager of these platforms, showing people what is going on is the biggest pain point."

A good argument has particular attributes that are well understood by scholars of informal logic and other kinds of argumentation (Walton et al., 2008).
Normally, of course, arguments are expressed in prose, and it is left to the reader to tease apart the elements in order to form a judgement about the line of reasoning. However, it is well established that the general public have poor argumentation and critical reasoning skills (Kuhn, 1991; Rider and Thomason, 2008). One of the technical goals of CIDA platforms is, consequently, to find more effective ways of rendering arguments, in order to make them more tractable for both humans and computational intelligence to parse and evaluate.

Learning context: Somewhat disappointingly, university students are no exception to the finding that most citizens have poor reasoning skills. Undergraduate students find it challenging to write in the appropriately critical manner that is the hallmark of academic writing (Andrews, 2010; Lillis and Turner, 2001).

3.2 Poor summarisation

Summarisation is a prerequisite to informed participation in online debates. Participants struggle to get a good overview of what is unfolding in an online community debate. Only the most motivated participants will commit a lot of time to reading the debate in order to identify the key members, the most relevant discussions, and so on. The majority of participants tend to respond unsystematically to stimulus messages, and do not digest earlier contributions before they make their own contribution to the debate, such is the cognitive overhead and limited time. This problem is crucial since it also influences other pain points such as idea duplication, shallow idea contribution and, therefore, poor participation.

Learning context: The demands of reading prior contributions before posting one's own apply equally to student forums as to others. The difference may, however, be that student forums can be more tightly controlled, either through imposing practices (e.g. "your postings must engage with ideas in at least two previous postings"; "your postings should seek to forge meaningful connections between earlier ones"), or through imposing technical constraints on user permissions ("you will be randomly assigned to a team of 6 who can only access their team forum").

3.3 Poor commitment to action

Decision-support systems that underestimate the complexity of socio-technical problems suffer from problem definitions not being collectively owned (Conklin, 2006), hence our focus on a high-quality deliberation process. However, even once candidate courses of action are clear, bringing motivated audiences to commit to action is difficult. Enthusiasts, those who have an interest in a subject but have yet to commit to taking action, are left behind. This was a central issue for most of the participants. Participants cared about ways to prompt action in community members, to the extent that reaching a consensus was considered less important than being enabled to act.

Learning context: The seriousness of this problem in educational settings may depend on how much autonomy students are given. In increasingly authentic learning contexts, one would expect greater student choice about the direction and focus of a project, and hence the growing importance of commitment to act to achieve that goal.

3.4 Sustaining participation

About 80% of the workshop participants considered lack of participation either important or very important. Of these, 40% considered it crucial.
Even with optimal methodologies, it is typical that only a fraction of any group will actively participate in online deliberation, as reflected in the well-known 'long tail' distribution of social media activity, whereby a very small percentage of participants are highly active. Moderators reported that motivating participants with widely differing levels of commitment, expertise and availability to contribute to an online debate is challenging and often unproductive. Participation is usually one-off, and maintaining presence as well as interest in the deliberation is a challenge. The focus of interest in the workshops seemed to be more on how to maintain participation than on how to enlarge it. As a general and recurring comment, also across workshop groups, many claimed: "it is better to have quality input from a small group than a lot of members but very little content". The real issue was considered to be "the lack of worthwhile and productive input. For example, 'liking' something on Facebook is a way of participating, but it is not necessarily that productive". Several participants suggested that, in terms of priorities, rather than seeking simply to engage more and more participants in the conversation, it is more important that the actions discussed are then followed through and implemented, and that the platform should allow users to track the outcomes of those actions.

Learning context: In an earlier paper (Buckingham Shum and Deakin Crick, 2012) we noted the disturbing levels of disengagement in the schools of many developed countries (Gilby et al., 2008; Willms et al., 2009; Yazzie-Mintz, 2009). These point to a widening disconnect between what motivates and engages many young people, and their experience of schooling. Such students, including the high-ability but disengaged, want to know the point of bothering to re-engage, and are thus analogous to the hard-to-engage citizens that the consultees spoke about. This 'sustaining participation' pain point thus points to the importance of (i) making learning authentic, and (ii) fostering forms of engagement that go beyond the lowest-common-denominator 'like' act or superficial comment.

3.5 Shallow contributions and unsystematic coverage

Open innovation systems tend to generate a large number of relatively shallow ideas. Open innovation sites do not in general encourage or support the collaborative refinement of ideas that could allow the development of more refined, deeply considered contributions. The majority of the contributions thus tend to be repetitions of a few obvious ideas. Moreover, there is no inherent mechanism for ensuring that the ideas submitted comprehensively cover the different facets of the problem at hand. The space of possible solutions is generally not specified up front, and there is no easy way for potential contributors to see which problem facets remain under-covered. The repetition mentioned above is detrimental both to consideration of how similar ideas actually differ (improving understanding) and to focusing on new ideas. As a result, social innovation systems often produce very partial coverage of the solution space. About 70% of the consultees considered this problem important or very important for enabling effective online deliberation, while the remaining 30% considered it of moderate or minor importance.

Learning context: This pain point is highly relevant to learning contexts requiring the rational analysis of a well-understood problem space.
The task here is to understand the key dimensions that may be used to differentiate one important class of solution from another, and/or the hierarchies that may be used to clarify how different solutions compare.

It is important to note, however, that in a learning context, shallow contributions (in the sense of social exchanges not specifically about the curriculum material) can play helpful roles by fostering a socially welcoming online environment, or building a community of practice: they build social capital, and create the conditions of trust in which learners are more likely to be open to challenge and new ideas. Such qualities have a role to play in authentic, socially rich learning contexts, but are less valued in assessments for which this is considered irrelevant. Shallow comments would therefore normally be considered weak contributions (unless the course is, for instance, about learning to host social learning spaces, resolve conflict, facilitate professional learning, etc.). However, while building trust and social capital are not the end of the story, they are preconditions to subsequent contributions becoming "deeper": for instance, evidencing a grasp of more complex concepts, modes of critical thinking and argumentation, or more challenging forms of discourse that might undermine one's preconceptions or threaten a worldview.

3.6 Poor idea evaluation

When there are thousands of ideas, as is common for innovation about complex problems, many emergent effects can deeply undercut the value of community ratings. Most people are likely to evaluate only a tiny fraction of the ideas, usually the ones at the top of the list. If, as is often the case, ideas are sorted by their average rating, one can expect that the system will quickly "lock" into a fairly static, and arbitrary, ranking, where the first few winners take all, even if they are inferior to many other ideas in the list. Even worse, stakeholders with vested interests can game the rating mechanisms in open innovation systems in order to manipulate which ideas rise to the top. A single idea post with focused voting, for example, may beat out a better idea that had its votes spread over many redundant instantiations. In an open ideation engagement, there is also often a disconnection between contributors' votes and the (often implicit) idea evaluation criteria held by those who voted. This problem is exacerbated when, as is often the case, people evolve their understanding of what they want as they learn more about the space of possible solutions. In general, current open innovation systems provide little support for a varied crowd building upon each other's evaluative expertise, or mentoring one another. Participants cannot see why other contributors provided the ratings that they did, nor is there a good way for them to examine and correct each other's facts and reasoning. At best, current open innovation systems just provide comment streams to capture discussions about the worth of an idea, and these quickly become unwieldy as the number of comments on an idea increases. Among the workshop participants, while this was not ranked as highly as the preceding pain points, approximately 60% ranked it "important" or "moderately important".
4. Collective intelligence deliberation platforms

4.1 Essence of the approach: semantic hypertext

The inaugural conference devoted to Collective Intelligence (CI) defined it as: "...behaviour that is both collective and intelligent. By collective, we mean groups of individual actors, including, for example, people, computational agents, and organizations. By intelligent, we mean that the collective behaviour of the group exhibits characteristics such as, for example, perception, learning, judgment, or problem solving." [www.ci2012.org]

We have previously drawn a distinction between CI derived from the aggregation of low-level click traces left as a by-product of user activity, and CI emerging from intentional user acts of synthesis and interpretation (De Liddo and Buckingham Shum, 2010). The latter is the focus of CIDA efforts, since wicked problems require significant intellectual effort in the construction of plausible narratives that make sense of data, which, depending on the context, may be scarce or abundant, but whose trustworthiness and significance must be decided.

There is an established research literature that provides key insights into the problem of making the structure and status of such deliberation visible. This focuses on the design of semiformal representations intended to aid both human and machine interpretation. Elsewhere, we have documented the roots of this field (Buckingham Shum, 2003). Explicit semantic networks provide a computational system with a more meaningful understanding of the relationships between ideas than natural language does. Following the established methodological value of Concept Mapping (Novak, 1998), the mapping of issues, ideas and arguments extends this to make explicit the presence of more than one perspective and the lines of reasoning associated with each. More formal approaches, derived from the convergence of AI and argumentation theory (Walton et al., 2008), model argument structures in finer detail, thus enabling automated evaluation. However, a longstanding research challenge is to add such computational power without sacrificing usability for non-experts. A relatively simple, 'lightweight' scheme for structuring deliberation has emerged as a popular choice for deliberation platforms. This uses a scheme originally devised in the 1970s called the Issue-Based Information System (IBIS) (Rittel, 1984 [1972]; Rittel and Webber, 1973). The key elements of this form of mapping conversational moves are shown in Figure 1.

Figure 1: Core elements of a map using the IBIS scheme, underpinning many deliberation platforms, which render the structures using diverse visualizations

An extensive literature documents the adoption issues that some users face when they attempt this level of discourse structuring in synchronous face-to-face contexts (Buckingham Shum et al., 2006). Counterbalancing this are accounts of high-impact mapping by 'cartographers' who are fluent with it, around which we are now developing a theoretical account and evidence base (Selvin et al., 2010). There is a growing body of evidence showing under what conditions tools which structure dialogue and argumentation into visualizations can support groups to build shared understanding, explore solutions to complex problems, record their rationale, and make better-informed collective decisions (Okada et al., 2008). These contexts include e-participation/e-democracy (Renton and Macintosh, 2007; Iandoli et al., 2009), environmental policy (Conklin, 2003), participatory urban planning (Culmsee and Awati, 2011), distributed science in the field (Sierhuis and Buckingham Shum, 2008), and emergency response (Tate et al., 2007). This line of work has led us to propose that we need to develop a distinctive class of CI system for Contested CI, in which consensus cannot be assumed; rather, the norm is to have perspectives in tension, the central task being to make those perspectives visible in order to understand them, with human and machine intelligence harnessed to this end (De Liddo et al., 2012). The emergence of IBIS as a 'lingua franca' in CI deliberation platforms reflects its relative simplicity, and hence comprehensibility and usability, compared to richer, more expressive argument modeling approaches, which require correspondingly higher skill levels.
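To make the IBIS vocabulary concrete, here is one minimal way such a map could be represented in code. This is an illustrative sketch only: the node types, link labels and example content are ours, not the schema of any of the platforms described in the next section.

```python
from dataclasses import dataclass, field
from typing import Literal, Optional

# Illustrative IBIS-style node and link types (not any specific platform's schema).
NodeType = Literal["issue", "idea", "pro", "con"]
Relation = Literal["responds-to", "supports", "challenges"]

@dataclass
class Node:
    node_id: str
    node_type: NodeType
    text: str
    author: str

@dataclass
class Link:
    source: str      # id of the newer contribution
    target: str      # id of the contribution it responds to
    relation: Relation

@dataclass
class DeliberationMap:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    links: list = field(default_factory=list)   # list of Link

    def add(self, node: Node, target: Optional[str] = None,
            relation: Optional[Relation] = None) -> None:
        """Add a typed node and, optionally, a typed link to an existing node."""
        self.nodes[node.node_id] = node
        if target is not None and relation is not None:
            self.links.append(Link(node.node_id, target, relation))

# A three-move fragment: an issue, one candidate idea, one challenging argument.
m = DeliberationMap()
m.add(Node("i1", "issue", "How should the city cut transport emissions?", "ann"))
m.add(Node("p1", "idea", "Make buses free at the point of use", "bob"), "i1", "responds-to")
m.add(Node("a1", "con", "Free fares alone did not raise ridership elsewhere", "cat"), "p1", "challenges")
```

Because every contribution is a typed node connected by a typed link, the same underlying graph can be rendered as a threaded outline, a network diagram or the other visualizations illustrated below, and it can be queried by the analytics discussed in Section 5.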
4.2 Examples of collective intelligence deliberation platforms

To make these technologies more tangible, let us give a few examples, noting that there is a growing range of tools, some of which are in use by citizens with no training.

DebateGraph has been used to host public debates around political policy and other societal dilemmas, in association with government consultations and many other public and not-for-profit sector organisations (Figure 2). DebateGraph is being used in over 100 countries in many different fields, including education, health, governance, media, publishing, environment, conflict resolution, conferences, group facilitation, and public consultation and planning.

Figure 2: DebateGraph supporting a White House online consultation

Footnote: e.g. Online Deliberation: Emerging Tools (ODET 2010 workshop), www.olnet.org/odet2010; Arguing on the Web 2.0 (ISSA 2014 workshop), http://www.sintelnet.eu/content/arguing-web-20-0

These deployments confirm that untrained users are willing to engage with more structured deliberation platforms when they care enough about the issues at stake, but there has not yet been academic research into usage patterns.

Another IBIS network mapping application is Cohere (Figure 3), which adds web annotation functionality (to directly insert clips from websites into the map) and social network analysis. The goal is to produce aggregated, integrated views of both the social and discourse dynamics. For instance, while social network analysis typically shows social ties based on direct interaction of some sort, undifferentiated except by the strength of the tie, this example shows a 'semantic social network' visualization of the kinds of social ties connecting participants, based on their agreement or disagreement around ideas (see the colour-coded green/red/grey links) (De Liddo et al., 2011).

Figure 3: Cohere's Semantic Social Network view colour-codes the ties between people based on an algorithm weighting how they are connected via semantic argumentation moves
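The Figure 3 caption refers to an algorithm that weights ties according to the argumentation moves connecting people. Cohere's actual data model and weighting are not reproduced here; the sketch below simply illustrates the general idea of aggregating each pair's supporting and challenging moves into a single, colour-coded tie.

```python
from collections import defaultdict

# Each move records who made it, whose contribution it targets, and its polarity.
# Field names and example data are illustrative only.
moves = [
    {"author": "ann", "target_author": "bob", "relation": "supports"},
    {"author": "ann", "target_author": "bob", "relation": "supports"},
    {"author": "cat", "target_author": "bob", "relation": "challenges"},
    {"author": "dan", "target_author": "ann", "relation": "responds-to"},  # neutral move
]

def semantic_ties(moves):
    """Aggregate person-to-person ties and colour them by the balance of moves."""
    counts = defaultdict(lambda: {"supports": 0, "challenges": 0, "neutral": 0})
    for mv in moves:
        pair = (mv["author"], mv["target_author"])
        key = mv["relation"] if mv["relation"] in ("supports", "challenges") else "neutral"
        counts[pair][key] += 1

    ties = []
    for (a, b), c in counts.items():
        balance = c["supports"] - c["challenges"]
        colour = "green" if balance > 0 else ("red" if balance < 0 else "grey")
        ties.append({"from": a, "to": b, "colour": colour, "weight": sum(c.values())})
    return ties

for tie in semantic_ties(moves):
    print(tie)   # e.g. {'from': 'ann', 'to': 'bob', 'colour': 'green', 'weight': 2}
```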
The OU's Evidence Hub (De Liddo and Buckingham Shum, 2013a) has been designed as a CI site for pooling fragments of clippings from the web, and for the posting of issues and ideas, but it then scaffolds the process of moving from these into 'evidence-based stories' which make meaningful, argumentative connections between these elements. Since a harmonious 'big picture' rarely emerges in contested fields such as education, business or policy making, the Evidence Hub also aims to show where people disagree and why, and where there appears to be a lack of evidence to back any of the claims. Figure 4 shows the story wizard, which scaffolds the user through the process of submitting an evidence-based claim.

Figure 4: The Evidence Hub's Story Wizard scaffolds the submission of an IBIS map without users needing to understand the underlying semantics

An example of the IBIS-based structure generated by the story wizard is shown in Figure 5, rendered as discussion forum-style threads, with added semantic types from IBIS.

Figure 5: An Evidence Hub outline knowledge tree view of the IBIS structure. Graph visualization and a zoomed-in details view are also provided.

A number of empirical studies now document the costs and benefits of the user interfaces of different versions (Iandoli et al., 2013; De Liddo and Buckingham Shum, 2013b).

MIT's Deliberatorium also uses the IBIS scheme, and has been evaluated both in educational contexts and as a medium for public political discourse (Iandoli et al., 2009; Klein et al., 2012). Figure 6 is taken from a city-wide deployment for citizen debate in an Italian election.

Figure 6: Deployment of Deliberatorium in citizen debate during a Naples city election

While the structure that IBIS introduces requires greater effort from participants, evaluation of this deployment in comparison to a conventional web forum "suggests that argument maps, despite the additional demands they place on users in terms of structuring their contributions, do not suppress participation compared to the much simpler, and more familiar, medium of web forums. This is, we believe, an important result in terms of the viability of argument-centric large-scale deliberation, and is also potentially relevant to the viability of other social computing approaches (e.g. collaboratively-authored semantic networks) that use semiformal representations."

After years of experimenting with the design and usage of graphical argument maps to promote critical thinking in public and educational discourse (van Gelder, 2003), Tim van Gelder is now concentrating on more conventional, simpler interfaces for civic debate. YourView (https://yourview.org.au; see also http://timvangelder.com/category/yourview/) offers a two-column interface to see, for each proposed idea, the list of supporting and challenging arguments (Figure 7).

Figure 7: YourView's two-column view of for/against arguments

Although IBIS is relatively simple as a modeling scheme, additional visualizations can be generated from it. It is possible to start mapping people's positions compared to a norm, such as how much you agree or disagree with peers (Figure 8), or, with the addition of domain-specific data, how close citizens' votes are to a specific political party's view (Figure 9).

Figure 8: Consider.it's group thinking histogram groups people in terms of how much they support or challenge an idea, and shows the pros and cons each group appeals to.
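The Figure 8 histogram can be read as a simple aggregation over stance data: bucket participants by how strongly they support or oppose an idea, then tally the pro and con points each bucket cites. The sketch below shows one way such a grouping could be computed; the record format and bucketing are illustrative and are not Consider.it's implementation.

```python
from collections import Counter

# Illustrative records: each participant's stance on one idea, from -1 (fully
# opposed) to +1 (fully in favour), plus the pro/con points they endorsed.
opinions = [
    {"user": "ann", "stance": 0.9,  "points": ["cuts congestion", "cleaner air"]},
    {"user": "bob", "stance": 0.3,  "points": ["cleaner air"]},
    {"user": "cat", "stance": -0.7, "points": ["costs too much"]},
    {"user": "dan", "stance": -0.2, "points": ["costs too much", "unclear funding"]},
]

def stance_histogram(opinions, n_bins=4):
    """Group participants into stance buckets and tally the points each bucket cites."""
    bins = [{"users": [], "points": Counter()} for _ in range(n_bins)]
    for op in opinions:
        # Map a stance in [-1, 1] onto a bin index 0 .. n_bins-1.
        idx = min(int((op["stance"] + 1) / 2 * n_bins), n_bins - 1)
        bins[idx]["users"].append(op["user"])
        bins[idx]["points"].update(op["points"])
    return bins

for i, b in enumerate(stance_histogram(opinions)):
    print(f"bin {i}: users={b['users']} top points={b['points'].most_common(2)}")
```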
Figure 9: YourView's Election Panorama shows how people's votes are located in the landscape of political parties, other organisations, and YourView users. The Panorama also displays where all participants collectively stand.

In a nutshell, IBIS-based deliberation platforms are collective intelligence platforms for what van Gelder terms "deliberative aggregation" (http://timvangelder.com/category/deliberative-aggregator): they harness the deliberative power of communities and make them smarter by providing argumentation feedback, analytics and visualizations, in the attempt to improve collective reflection and awareness for the public good. The DCLA community could take inspiration from the kinds of user experiences that these tools offer, since they are designed for members of the public with no formal training, outside the constraints and requirements associated with the formal educational contexts in which CSCL research is conducted. One might hypothesise that students will find such tools engaging, and closer to the quality of user experience that they are accustomed to on the web at large, and in other social media specifically. The question when migrating these tools into educational contexts is what, if anything, needs to be added to create, and evidence, learning. That, of course, is determined by the kinds of learning one is aiming to deliver. Advances within CSCL and AI in Education argumentation research have much to offer to the DCLA-CIDA dialogue, as discussed later.

5. Deliberation analytics to quantify the health of a debate

Even moderately complex societal challenges can involve scores of problems to solve, hundreds of possible solutions, and thousands of arguments for and against these possible solutions. A critical challenge for making sense of such a deliberation is thus attention allocation. How can users know which particular topics are most in need of attention? Which areas are progressing well, and which may require some kind of intervention? Which parts of a summary map are "mature" (i.e. comprehensively cover the key problems, solutions, and arguments) and thus ready to be studied in detail? We are seeking to develop deliberation analytics, by which we mean algorithms that yield measures of deliberation quality and map them to personalized attention-mediation suggestions. If these algorithms work effectively, participants should be more aware of where their efforts can do the most good, helping to maximize the collective intelligence of the system.
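As one illustration of what such an algorithm might look like, the sketch below runs a few hand-written rules over an IBIS-style map (reusing the DeliberationMap structure sketched in Section 4.1) and emits attention suggestions. The rules and wording are invented for illustration; they are not the metrics of the Deliberatorium or of any other platform discussed here.

```python
def attention_suggestions(m):
    """Scan an IBIS-style DeliberationMap (see the sketch in Section 4.1) and
    flag places where attention may be needed. Rules are illustrative only."""
    # Index incoming links: target node id -> list of (child node, relation).
    children = {}
    for link in m.links:
        children.setdefault(link.target, []).append((m.nodes[link.source], link.relation))

    suggestions = []
    for node in m.nodes.values():
        kids = children.get(node.node_id, [])
        if node.node_type == "issue" and not kids:
            suggestions.append((node.node_id, "open issue with no candidate ideas yet"))
        elif node.node_type == "idea":
            pros = [k for k, _ in kids if k.node_type == "pro"]
            cons = [k for k, _ in kids if k.node_type == "con"]
            if not pros and not cons:
                suggestions.append((node.node_id, "idea with no evaluation at all"))
            elif not pros or not cons:
                suggestions.append((node.node_id, "one-sided evaluation: invite counter-arguments"))
    return suggestions

# Using the example map `m` built in the Section 4.1 sketch:
#   for node_id, advice in attention_suggestions(m):
#       print(node_id, "->", advice)
```

Routing each flag to the right person (the idea's author, a dissenting participant, or the moderator) is then the personalized attention-mediation step described above.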
Both the DCLA and CIDA communities should build on the advances in argumentation analytics developed at the intersection of computer-supported collaborative learning (CSCL) and artificial intelligence in education (AIED). Working in formal educational contexts, an established strand of research has sought to scaffold student argumentation and discussion skills through a variety of techniques which will be of great interest to the learning analytics community. The goals of these efforts overlap significantly with those of CIDA, as summarised in this substantive review of the approaches, progress and challenges in the field [REF]. For example, a recent literature review (Scheuer et al., 2012) helpfully summarises the approaches that have been taken within the CSCL/AIED research communities to automated analysis of semantic argument models (Table 1).

Table 1. Overview of analysis approaches for argument models (Scheuer et al., 2012)

- Syntactic analysis: rule-based approaches that find syntactic patterns in argument diagrams. Systems: Belvedere, LARGO
- Problem-specific analysis: use of a problem-specific knowledge base to analyze student arguments or synthesize new arguments. Systems: Belvedere, LARGO, Rashi, CATO
- Simulation of reasoning and decision-making processes: qualitative and quantitative approaches to determine the believability/acceptability of statements in argument models. Systems: Zeno, Hermes, ArguMed, Carneades, Convince Me, Yuan et al. (2008)
- Assessment of content quality: collaborative filtering, a technique in which the views of a community of users are evaluated, to assess the quality of the contributions' textual content. Systems: LARGO
- Classification of the current modeling phase: classification of the current phase a student is in according to a predefined process model. Systems: Belvedere, LARGO

With specific reference to the need to help moderators prioritise which parts of multiple discussions require their attention, consider the ARGUNAUT tool below, which provides a moderator's interface that will be of interest to CIDA researchers (Figure 10), rendering the results of both shallower and deeper analysis of students' graphical argument maps (McLaren et al., 2010).

Figure 10: Prototype argument mapping moderator's interface using automated parsing of the argument graph to generate visual analytics (McLaren et al., 2010)

This is described as follows by McLaren et al.: "A teacher can monitor multiple ongoing discussions in parallel using a tool called the "Moderator's Interface" [...] The teacher can toggle between the different e-discussions by selecting the different groups shown in the list on the left (i.e., Bio group a, Bio group b, etc.). Within each discussion, important aspects are visualized as awareness displays. These displays are presumed to be helpful to a teacher as he or she tries to find pedagogically meaningful aspects of the discussion (e.g., critical thinking, dialogism). In [the figure], on the right, three awareness displays are shown. The graph in the upper right shows the frequency with which students in the currently selected discussion have responded to one another's contributions, as well as which students are most in the center of the discussion through frequent responses to others' contributions. The middle right graph shows a comparison of the number of contributions made by each student, a rough indication of student engagement, while the graph in the lower right shows a comparison of the types of contributions made by students, a rough indication of how students are engaging with one another (e.g., Are the students asking one another questions? Making arguments?)."

In the Process-Goal-Exception analysis approach we describe next, several of the above techniques are considered, and we will also benefit from a deeper understanding of their strengths and limits when scaled to the larger groups of citizens involved in CIDA. Also of direct interest to CIDA, especially given the scaling challenge, McLaren et al. document a range of machine learning approaches they tested to train classifiers to recognize increasingly complex map elements, from single nodes to pairs, to larger structures. Consider the extract in Table 2, reporting their analytics for argumentative moves that would also be of relevance to CIDA systems.

Table 2: Extract from (McLaren et al., 2010) showing examples of argument map network structures which they trained automated classifiers to recognise
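To give a flavour of what training such classifiers involves, the sketch below feeds hand-crafted structural features of small map fragments to an off-the-shelf classifier (assuming scikit-learn is available). The features, labels and data are invented for illustration; they do not reproduce McLaren et al.'s feature set, corpus or models.

```python
from sklearn.linear_model import LogisticRegression

# Each row describes one argument-map fragment with simple structural features:
# [number of nodes, supporting links, challenging links, depth of the fragment].
X = [
    [2, 1, 0, 1],   # idea with a single supporting argument
    [2, 0, 1, 1],   # idea with a single counter-argument
    [4, 1, 2, 2],   # idea with a contested chain of arguments
    [5, 2, 2, 3],
    [1, 0, 0, 0],   # isolated contribution
    [3, 2, 0, 2],
]
y = ["supported", "rebutted", "contested", "contested", "isolated", "supported"]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[3, 0, 2, 2]]))  # classify a previously unseen fragment
```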
In our work, we are exploring the strengths and weaknesses of several approaches:

1. machine learning on natural language discourse, e.g. textchat (Ferguson et al., 2013)
2. rhetorical parsing of metadiscourse in natural language, e.g. discussion forums, argument maps, and research reports (Simsek et al., 2013; De Liddo et al., 2012)
3. rule-based agents monitoring IBIS structures in order to diagnose the health of a deliberation and make recommendations as to where attention might best be directed (Klein, 2003; 2012)

Since the first two approaches have already been reported to the LAK community, we will elaborate on the third approach, in particular because it is a new approach for DCLA, and introduces new ways of thinking about deliberation quality from fields such as organization science and group psychology.

6. Process-Goal-Exception analysis for deliberation analytics

Deliberation analytics can be identified using Process-Goal-Exception analysis, a technique developed by Klein (2003). The key idea is that analytics can be viewed as the processes we put in place to identify, and respond to, occasions when a process deviates from its ideal functioning. This methodology allows us to identify process deviations and their associated responses in a systematic way that fosters complete coverage (Figure 11).

Figure 11: Process-Goal-Exception analysis, the methodology used for identifying analytics: identify the normative process model (process decomposition), identify ideal goals for each subtask, identify possible exceptions for each goal, and identify handlers for each exception.
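As a concrete illustration of the structure this analysis produces, the sketch below represents a fragment of a Process-Goal-Exception decomposition as plain data: each subtask carries a goal, each goal a set of possible exceptions, and each exception a list of candidate handlers. The subtask names anticipate the normative model presented in Section 6.1 below; the goals, exceptions and handlers shown are invented examples, not Klein's published catalogue.

```python
# A minimal Process-Goal-Exception fragment, held as plain data. All goal,
# exception and handler texts are invented examples for illustration.
process_model = {
    "identify problems": {
        "goal": "the key facets of the problem are covered",
        "exceptions": {
            "coverage is patchy": ["prompt participants towards uncovered facets"],
        },
    },
    "identify possible solutions": {
        "goal": "ideas are diverse and non-redundant",
        "exceptions": {
            "many near-duplicate ideas": ["suggest merging similar ideas"],
            "few contributors active": ["alert the moderator to re-engage the community"],
        },
    },
    "evaluate candidate solutions": {
        "goal": "each idea receives balanced pro and con argumentation",
        "exceptions": {
            "one-sided argumentation": ["invite counter-arguments from dissenting participants"],
        },
    },
}

def handlers_for(subtask, exception):
    """Look up the handlers registered for an observed process deviation."""
    return process_model.get(subtask, {}).get("exceptions", {}).get(exception, [])

print(handlers_for("evaluate candidate solutions", "one-sided argumentation"))
```

A deliberation analytics service built along these lines amounts to detecting, from the live deliberation data, which exceptions currently hold and surfacing the corresponding handlers to participants or moderators.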
6.1 Identify normative process model

The first step is to identify a model of how the target process should work. The core process we want to support is social innovation. Our model of this process consists of the following subtasks (Eemeren and Grootendorst, 2003; Walton and Krabbe, 1995):

1. Identify problems to solve
2. Identify possible solutions for these problems
3. Evaluate the candidate solutions
4. Select the best solution(s) from amongst the candidates
5. Enact the selected solution(s)
6. Learn from experience

This model is potentially iterative: enacting a selected solution (step 5) can, for example, lead the community to identify new problems to solve (step 1). Note also that social innovation engagements will not necessarily include all of these steps; it depends on the purpose of the engagement (Conklin, 2005), which can for example include:

Brainstorming: create a list of solution options for a problem (step 2). Examples of this include strategic crowdsourcing in a company (before prioritization and decision by an executive committee), or public consultation for a city. This can include using creativity techniques such as recombining known ideas.

Argumentation: debate the relative merit of competing solution options (step 3). This can include the use of simulation and forecasting tools to assess the probable impact of the options under consideration.

Decision-making: select the preferred option from among a menu of alternatives (step 4).

Design enhancement: refine an existing solution design (i.e. start with step 5, and then loop back to step 1).

The social innovation process includes two key sub-processes. One is harvesting, wherein participants feed content, e.g. content found in conventional social media, into the social innovation system. The harvesting sub-process consists of the following subtasks:
